Facial expression recognition is a challenging problem in computer vision. The main difficulties are class imbalance caused by data collection and uncertainty arising from inherent noise, such as ambiguous facial expressions and inconsistent labels. However, existing research has focused either on class imbalance or on uncertainty, ignoring how to address the two problems jointly. In this paper, we therefore propose a framework based on ResNet and attention to solve both problems. We assign a weight to each class; through this penalty mechanism, the model pays more attention to learning from small classes during training, and the resulting drop in accuracy is compensated by a Convolutional Block Attention Module (CBAM). Meanwhile, the backbone network also learns an uncertainty feature for each sample. By mixing uncertainty features across samples, the model better learns the features that are useful for classification, thereby suppressing uncertainty. Experiments show that our method surpasses most baseline methods in accuracy on facial expression datasets (e.g., AffectNet, RAF-DB) and also handles class imbalance well.
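As a concrete illustration of the class-weighting idea, the following is a minimal sketch of a class-reweighted loss, assuming inverse-frequency weights; the paper's exact weighting rule and the CBAM/uncertainty branches are not reproduced here.

```python
# Hypothetical sketch: class-reweighted loss for an imbalanced expression dataset.
# The exact weighting rule used in the paper is not given in the abstract; inverse
# class frequency is one common choice.
import torch
import torch.nn as nn

def inverse_frequency_weights(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Weight each class inversely to its frequency in the training labels."""
    counts = torch.bincount(labels, minlength=num_classes).float().clamp(min=1)
    return counts.sum() / (num_classes * counts)      # rare classes get weights > 1

# Example usage with a ResNet-style classifier's logits (7 basic expressions).
labels = torch.randint(0, 7, (1024,))
weights = inverse_frequency_weights(labels, num_classes=7)
criterion = nn.CrossEntropyLoss(weight=weights)       # penalizes errors on rare classes more

logits = torch.randn(32, 7)
batch_labels = torch.randint(0, 7, (32,))
loss = criterion(logits, batch_labels)
```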
We consider the problem of finding an accurate representation of neuron shapes, extracting sub-cellular features, and classifying neurons based on neuron shapes. In neuroscience research, the skeleton representation is often used as a compact and abstract representation of neuron shapes. However, existing methods are limited to extracting and analyzing "curve" skeletons, which apply only to tubular shapes. This paper presents a 3D neuron morphology analysis method for more general and complex neuron shapes. First, we introduce the concept of the skeleton mesh to represent general neuron shapes and propose a novel method for computing mesh representations from 3D surface point clouds. A skeleton graph is then obtained from the skeleton mesh and is used to extract sub-cellular features. Finally, an unsupervised learning method is used to embed the skeleton graph for neuron classification. Extensive experimental results are provided and demonstrate the robustness of our method for analyzing neuron morphology.
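To illustrate the kind of sub-cellular statistics a skeleton graph exposes, here is a small sketch that computes branch points, tips, and total branch length with networkx; the features actually used in the paper may differ.

```python
# Illustrative sketch: simple morphology statistics from a skeleton graph whose
# edges carry a 'length' attribute. Branch points, tips, and branch lengths are
# typical examples; the abstract does not list the paper's exact feature set.
import networkx as nx

def skeleton_graph_features(g: nx.Graph) -> dict:
    degrees = dict(g.degree())
    return {
        "num_branch_points": sum(1 for d in degrees.values() if d >= 3),
        "num_tips": sum(1 for d in degrees.values() if d == 1),
        "total_length": sum(data.get("length", 0.0) for _, _, data in g.edges(data=True)),
    }

# Toy example: a Y-shaped skeleton with three branches meeting at one node.
g = nx.Graph()
g.add_edge(0, 1, length=5.0)
g.add_edge(1, 2, length=3.0)
g.add_edge(1, 3, length=4.0)
print(skeleton_graph_features(g))  # {'num_branch_points': 1, 'num_tips': 3, 'total_length': 12.0}
```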
Most Deep Learning (DL) based Compressed Sensing (DCS) algorithms adopt a single neural network for signal reconstruction and fail to jointly consider the influence of the sampling operation on reconstruction. In this paper, we propose a unified framework that jointly considers the sampling and reconstruction processes for image compressive sensing, based on well-designed cascade neural networks. Two sub-networks, the sampling sub-network and the reconstruction sub-network, are included in the proposed framework. In the sampling sub-network, an adaptive fully connected layer is used instead of the traditional random matrix to mimic the sampling operator. In the reconstruction sub-network, a cascade network combining a stacked denoising autoencoder (SDA) and a convolutional neural network (CNN) is designed to reconstruct signals. The SDA solves the signal mapping problem and produces an initial reconstruction. The CNN then fully recovers the structure and texture features of the image to obtain better reconstruction performance. Extensive experiments show that this framework outperforms many other state-of-the-art methods, especially at low sampling rates.
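A minimal sketch of the cascade idea follows, assuming a learnable fully connected sampling layer, a linear initial reconstruction standing in for the SDA stage, and a small CNN refinement; the layer sizes are illustrative, not the paper's configuration.

```python
# Hedged sketch of a sampling sub-network plus a two-stage reconstruction sub-network.
import torch
import torch.nn as nn

class CascadeCS(nn.Module):
    def __init__(self, block_size: int = 32, sampling_rate: float = 0.1):
        super().__init__()
        n = block_size * block_size                  # pixels per image block
        m = max(1, int(n * sampling_rate))           # number of measurements
        self.block_size = block_size
        self.sample = nn.Linear(n, m, bias=False)    # learned sampling operator (replaces a random matrix)
        self.init_recon = nn.Linear(m, n)            # coarse signal mapping (simplified SDA-like stage)
        self.refine = nn.Sequential(                 # CNN stage recovering structure and texture
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, blocks: torch.Tensor) -> torch.Tensor:
        # blocks: (batch, 1, block_size, block_size)
        b = blocks.size(0)
        y = self.sample(blocks.flatten(1))                       # compressive measurements
        x0 = self.init_recon(y).view(b, 1, self.block_size, self.block_size)
        return x0 + self.refine(x0)                              # residual refinement

model = CascadeCS()
print(model(torch.rand(4, 1, 32, 32)).shape)  # torch.Size([4, 1, 32, 32])
```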
Indoor scenes typically exhibit complex, spatially-varying appearance from global illumination, making inverse rendering a challenging ill-posed problem. This work presents an end-to-end, learning-based inverse rendering framework incorporating differentiable Monte Carlo raytracing with importance sampling. The framework takes a single image as input to jointly recover the underlying geometry, spatially-varying lighting, and photorealistic materials. Specifically, we introduce a physically-based differentiable rendering layer with screen-space ray tracing, resulting in more realistic specular reflections that match the input photo. In addition, we create a large-scale, photorealistic indoor scene dataset with significantly richer details like complex furniture and dedicated decorations. Further, we design a novel out-of-view lighting network with uncertainty-aware refinement leveraging hypernetwork-based neural radiance fields to predict lighting outside the view of the input photo. Through extensive evaluations on common benchmark datasets, we demonstrate superior inverse rendering quality of our method compared to state-of-the-art baselines, enabling various applications such as complex object insertion and material editing with high fidelity. Code and data will be made available at \url{https://jingsenzhu.github.io/invrend}.
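As background for the importance-sampling component, the following is a generic, differentiable sketch of cosine-weighted hemisphere sampling in PyTorch; it is a standard strategy for diffuse surfaces, not the paper's actual screen-space ray-tracing layer.

```python
# Generic sketch: cosine-weighted hemisphere importance sampling with torch ops.
import math
import torch

def sample_cosine_hemisphere(n: int):
    """Sample n directions in the local frame (z = surface normal) with pdf = cos(theta) / pi."""
    u1, u2 = torch.rand(n), torch.rand(n)
    r, phi = torch.sqrt(u1), 2.0 * math.pi * u2
    d = torch.stack([r * torch.cos(phi), r * torch.sin(phi), torch.sqrt(1.0 - u1)], dim=-1)
    pdf = d[:, 2] / math.pi          # cos(theta) / pi
    return d, pdf

# Monte Carlo estimate of outgoing radiance for a diffuse surface under constant incoming radiance.
albedo, L_in = 0.8, 1.0
dirs, pdf = sample_cosine_hemisphere(4096)
cos_theta = dirs[:, 2]
estimate = ((albedo / math.pi) * L_in * cos_theta / pdf).mean()
print(float(estimate))               # ≈ albedo * L_in = 0.8 (the importance weights cancel)
```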
We consider a class of Riemannian optimization problems where the objective is the sum of a smooth function and a nonsmooth function, considered in the ambient space. This class of problems finds important applications in machine learning and statistics, such as sparse principal component analysis, sparse spectral clustering, and orthogonal dictionary learning. We propose a Riemannian alternating direction method of multipliers (ADMM) to solve this class of problems. Our algorithm adopts easily computable steps in each iteration. The iteration complexity of the proposed algorithm for obtaining an $\epsilon$-stationary point is analyzed under mild assumptions. To the best of our knowledge, this is the first Riemannian ADMM with a provable convergence guarantee for solving Riemannian optimization problems with a nonsmooth objective. Numerical experiments are conducted to demonstrate the advantage of the proposed method.
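To make the splitting concrete, one standard way to set up such a problem (a generic sketch, not necessarily the exact scheme analyzed in the paper) is to duplicate the variable in the ambient space and alternate between a smooth Riemannian subproblem, a proximal step, and a dual update:
$$\min_{x \in \mathcal{M}} f(x) + h(x) \;\;\Longleftrightarrow\;\; \min_{x \in \mathcal{M},\, z} f(x) + h(z) \;\;\text{s.t.}\;\; x = z,$$
$$x^{k+1} = \operatorname*{arg\,min}_{x \in \mathcal{M}} \; f(x) + \langle \lambda^k, x\rangle + \tfrac{\rho}{2}\lVert x - z^k\rVert^2, \qquad z^{k+1} = \operatorname{prox}_{h/\rho}\!\big(x^{k+1} + \lambda^k/\rho\big), \qquad \lambda^{k+1} = \lambda^k + \rho\,(x^{k+1} - z^{k+1}).$$
Here the $x$-subproblem is smooth and can be handled with retraction-based Riemannian steps, while the $z$-update is a cheap proximal mapping of the nonsmooth term.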
Due to the instability and limitations of unimodal biometric systems, multimodal systems have attracted the attention of researchers. However, how to exploit the independent and complementary information between different modalities remains a key and challenging problem. In this paper, we propose a multimodal fusion recognition algorithm based on fingerprints and finger veins (Fingerprint and Finger-Vein Channel-Spatial Attention Fusion Module, FPV-CSAFM). Specifically, for each pair of fingerprint and finger-vein images, we first propose a simple and effective convolutional neural network (CNN) to extract features. We then construct a multimodal fusion module (Channel-Spatial Attention Fusion Module, CSAFM) to fully fuse the complementary information between fingerprints and finger veins. Unlike existing fusion strategies, our fusion method can dynamically adjust the fusion weights according to the importance of the different modalities along the channel and spatial dimensions, so as to better combine the information of the two modalities and improve the overall recognition performance. To evaluate the performance of our method, we conduct a series of experiments on multiple public datasets. The experimental results show that the proposed FPV-CSAFM achieves excellent recognition performance based on fingerprints and finger veins on three multimodal datasets.
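To make the fusion idea concrete, here is a hedged sketch of a channel-spatial attention fusion block for two modality feature maps; the real CSAFM architecture is not detailed in the abstract, so the layer shapes and gating choices below are assumptions.

```python
# Hedged sketch: fuse fingerprint and finger-vein feature maps with channel and
# spatial attention. Not the paper's CSAFM; it only shows the general mechanism.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Channel attention: one weight per channel of the concatenated features.
        self.channel_gate = nn.Sequential(
            nn.Linear(2 * channels, channels // 2), nn.ReLU(),
            nn.Linear(channels // 2, 2 * channels), nn.Sigmoid(),
        )
        # Spatial attention: a per-pixel weight deciding which modality to trust where.
        self.spatial_gate = nn.Sequential(nn.Conv2d(2 * channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, fp: torch.Tensor, fv: torch.Tensor) -> torch.Tensor:
        x = torch.cat([fp, fv], dim=1)                              # (B, 2C, H, W)
        c = self.channel_gate(x.mean(dim=(2, 3)))[..., None, None]  # channel weights
        x = x * c
        s = self.spatial_gate(x)                                    # (B, 1, H, W)
        half = fp.size(1)
        return s * x[:, :half] + (1 - s) * x[:, half:]              # blend the two modalities

fuse = AttentionFusion(channels=64)
print(fuse(torch.rand(2, 64, 16, 16), torch.rand(2, 64, 16, 16)).shape)  # torch.Size([2, 64, 16, 16])
```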
This paper presents a method for time-lapse 3D cell analysis. Specifically, we consider the problem of accurately localizing and quantitatively analyzing sub-cellular features, and the problem of tracking individual cells from time-lapse 3D confocal cell image stacks. The heterogeneity of cells and the volume of multi-dimensional images pose major challenges for fully automated analysis of cell morphogenesis and development. This work is motivated by the pavement cell growth process and the construction of quantitative morphogenesis models. We propose a deep-feature-based segmentation method to accurately detect and label each cell region. An adjacency-graph-based method is used to extract sub-cellular features of the segmented cells. Finally, a robust graph-based tracking algorithm using multiple cell features is proposed to associate cells across different time instances. Extensive experimental results are provided and demonstrate the robustness of the proposed method. The code is available on GitHub, and the method is available as a service through the Bisque Portal.
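As a simplified illustration of the association step, the sketch below matches cells between consecutive frames by minimizing a feature distance with the Hungarian algorithm; the paper's tracking algorithm is graph-based and uses richer cell features.

```python
# Illustrative sketch: frame-to-frame cell association via minimum-cost matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cells(features_t: np.ndarray, features_t1: np.ndarray):
    """Associate cells between consecutive frames given per-cell feature vectors."""
    # Pairwise Euclidean distances between cells at time t and t+1.
    cost = np.linalg.norm(features_t[:, None, :] - features_t1[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)          # minimum-cost one-to-one matching
    return list(zip(rows.tolist(), cols.tolist()))

# Toy example: 3 cells described by (centroid_x, centroid_y, volume).
cells_t  = np.array([[10.0, 12.0, 300.0], [40.0, 41.0, 500.0], [70.0, 15.0, 420.0]])
cells_t1 = np.array([[41.0, 43.0, 510.0], [11.0, 13.0, 310.0], [69.0, 16.0, 415.0]])
print(match_cells(cells_t, cells_t1))  # [(0, 1), (1, 0), (2, 2)]
```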
Estimated time of arrival (ETA) prediction, also known as travel time estimation, is a fundamental task for various intelligent transportation applications such as navigation, route planning, and ride-hailing services. To accurately predict the travel time of a route, it is essential to take into account both contextual and predictive factors, such as spatial-temporal interactions, driving behavior, and the propagation of traffic congestion. The ETA prediction models previously deployed at Baidu Maps have addressed the factors of spatial-temporal interaction (ConSTGAT) and driving behavior (SSML). In this work, we focus on modeling traffic congestion propagation patterns to improve ETA performance. Modeling congestion propagation patterns is challenging: it requires accounting for how the affected region spreads over time, as well as the cumulative effect of delay variations over time caused by traffic events on the road network. In this paper, we propose a practical industrial-grade ETA prediction framework named DuETA. Specifically, we construct a congestion-sensitive graph based on the correlations of traffic patterns, and we develop a route-aware graph transformer to directly learn the long-distance correlations of road segments. This design enables DuETA to capture the interactions between pairs of road segments that are spatially distant but highly correlated in their traffic conditions. Extensive experiments are conducted on large-scale, real-world datasets collected from Baidu Maps. Experimental results show that ETA prediction can significantly benefit from the learned traffic congestion propagation patterns. Moreover, DuETA has been deployed in production at Baidu Maps, serving billions of requests every day. This demonstrates that DuETA is an industrial-grade and robust solution for large-scale ETA prediction services.
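The core mechanism can be sketched as self-attention over route segment embeddings with an additive bias derived from a congestion-sensitive adjacency; DuETA's route-aware graph transformer is considerably more involved, so treat the module below as a conceptual sketch only.

```python
# Conceptual sketch: attention over a route's segments, biased toward segment pairs
# whose traffic patterns are strongly correlated (a stand-in congestion-sensitive graph).
import torch
import torch.nn as nn

class SegmentAttention(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, seg_emb: torch.Tensor, congestion_bias: torch.Tensor) -> torch.Tensor:
        # seg_emb: (batch, num_segments, dim)
        # congestion_bias: (num_segments, num_segments), larger for correlated pairs;
        # added to the attention scores so distant but correlated segments interact.
        out, _ = self.attn(seg_emb, seg_emb, seg_emb, attn_mask=congestion_bias)
        return out

layer = SegmentAttention()
seg = torch.rand(2, 12, 64)                       # a route with 12 segments
bias = (torch.rand(12, 12) > 0.8).float() * 2.0   # toy congestion-sensitive graph
print(layer(seg, bias).shape)                     # torch.Size([2, 12, 64])
```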
The explosion of data and the growth of model sizes have driven remarkable progress in large-scale machine learning, but they also make model training time-consuming and model storage difficult. To address these problems in distributed model training settings with high computational efficiency and device limitations, two main difficulties remain. On the one hand, the communication cost of exchanging information, e.g., stochastic gradients among different workers, is a key bottleneck for distributed training efficiency. On the other hand, models with fewer parameters are easier to store and communicate, but at the risk of degrading model performance. To simultaneously balance communication cost, model capacity, and model performance, we propose the Quantized Composite Mirror Descent Adaptive Subgradient method (QCMD AdaGrad) and the Quantized Regularized Dual Averaging Adaptive Subgradient method (QRDA AdaGrad) for distributed training. Specifically, we explore the combination of gradient quantization and sparse models to reduce the communication cost per iteration in distributed training. An adaptive learning-rate matrix based on the quantized gradients is constructed to strike a balance among communication cost, accuracy, and model sparsity. Moreover, we theoretically find that a large quantization error introduces additional noise, which affects the convergence and sparsity of the model. Therefore, a threshold quantization strategy with relatively small error is adopted in QCMD AdaGrad and QRDA AdaGrad to improve the signal-to-noise ratio and preserve the sparsity of the model. Both theoretical analysis and empirical results demonstrate the efficacy and efficiency of the proposed algorithms.
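A rough sketch of the two ingredients follows: threshold-based gradient quantization and an AdaGrad-style composite update with an l1 proximal (soft-thresholding) step. The constants and the quantizer are assumptions, not the paper's QCMD/QRDA specification.

```python
# Hedged sketch: threshold quantization of gradients + composite adaptive subgradient step.
import numpy as np

def threshold_quantize(g: np.ndarray, tau: float) -> np.ndarray:
    """Keep only the sign of coordinates above a threshold, scaled by their mean magnitude."""
    mask = np.abs(g) > tau
    scale = np.abs(g[mask]).mean() if mask.any() else 0.0
    return np.sign(g) * mask * scale          # sparse, low-precision surrogate of g

class CompositeAdagrad:
    def __init__(self, dim: int, lr: float = 0.1, l1: float = 1e-3, eps: float = 1e-8):
        self.w = np.zeros(dim)
        self.h = np.zeros(dim)                # accumulated squared (quantized) gradients
        self.lr, self.l1, self.eps = lr, l1, eps

    def step(self, g_quantized: np.ndarray):
        self.h += g_quantized ** 2
        step = self.lr / (np.sqrt(self.h) + self.eps)
        z = self.w - step * g_quantized                                     # adaptive gradient step
        self.w = np.sign(z) * np.maximum(np.abs(z) - step * self.l1, 0.0)   # l1 prox (soft-thresholding)

opt = CompositeAdagrad(dim=5)
g = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
opt.step(threshold_quantize(g, tau=0.1))
print(opt.w)                                   # small-magnitude coordinates stay exactly zero
```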
Thanks to its high recognition accuracy, face recognition technology is used in many fields, including face unlock on mobile devices, community access control systems, and urban surveillance. Since very deep network structures are required to guarantee the current high accuracy, face images usually need to be transmitted to third-party servers with high computational power for inference. However, face images visually reveal the user's identity information. In this process, both untrusted service providers and malicious users can significantly increase the risk of personal privacy leakage. Current privacy-preserving recognition methods are often accompanied by many side effects, such as a significant increase in inference time or a noticeable drop in recognition accuracy. This paper proposes a privacy-preserving face recognition method that applies differential privacy in the frequency domain. Owing to the use of differential privacy, it offers a theoretical guarantee of privacy, while the loss of accuracy is very small. The method first transforms the original image into the frequency domain and removes the direct component, known as DC. Then, a privacy-budget allocation method can be learned based on the loss of the back-end face recognition network within the differential privacy framework. Finally, it adds the corresponding noise to the frequency-domain features. Extensive experiments show that our method performs very well on several classical face recognition test sets.
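The overall pipeline can be sketched as a block DCT, removal of the DC coefficient, and Laplace noise scaled by a privacy budget; note that the paper learns the budget allocation from the recognition loss, whereas the budget below is a fixed placeholder.

```python
# Hedged sketch of the frequency-domain privatization step. The learned per-coefficient
# budget allocation and the back-end recognition network are omitted.
import numpy as np
from scipy.fftpack import dct

def dct2(block: np.ndarray) -> np.ndarray:
    """2D type-II DCT with orthonormal scaling."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def privatize_block(block: np.ndarray, epsilon: float, sensitivity: float = 1.0) -> np.ndarray:
    coeffs = dct2(block)
    coeffs[0, 0] = 0.0                                  # remove the direct (DC) component
    noise = np.random.laplace(scale=sensitivity / epsilon, size=coeffs.shape)
    return coeffs + noise                               # noisy frequency-domain features

block = np.random.rand(8, 8)
private_feat = privatize_block(block, epsilon=2.0)
print(private_feat.shape)  # (8, 8)
```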